Two Bagging Algorithms with Coupled Learners to Encourage Diversity
Authors
Abstract
In this paper, we present two ensemble learning algorithms which make use of bootstrapping and out-of-bag estimation in an attempt to inherit the robustness of bagging to overfitting. Unlike bagging, in these algorithms each learner has visibility of the other learners, and the learners cooperate to achieve diversity, a property that has proved to be a major concern for ensemble models. Experiments are reported on two regression problems obtained from the UCI repository.
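The bootstrapping and out-of-bag (OOB) machinery the abstract refers to is standard in bagging; a minimal sketch of plain bagging with an OOB error estimate (the toy constant-mean "learner" and function name are illustrative, not the paper's coupled algorithm):

```python
import numpy as np

rng = np.random.default_rng(0)

def bag_predict_oob(X, y, n_learners=25):
    """Bagging for regression with an out-of-bag (OOB) error estimate.

    Each base learner here is a toy constant predictor (the mean of its
    bootstrap sample); a real ensemble would use trees or networks.
    """
    n = len(y)
    oob_preds = np.zeros(n)   # accumulated predictions on left-out points
    oob_counts = np.zeros(n)  # how many learners left each point out
    models = []
    for _ in range(n_learners):
        idx = rng.integers(0, n, size=n)        # bootstrap sample, with replacement
        oob = np.setdiff1d(np.arange(n), idx)   # points not drawn into this sample
        model = y[idx].mean()                   # "train" the toy learner
        models.append(model)
        oob_preds[oob] += model                 # evaluate only on its OOB points
        oob_counts[oob] += 1
    seen = oob_counts > 0
    oob_mse = np.mean((oob_preds[seen] / oob_counts[seen] - y[seen]) ** 2)
    return np.mean(models), oob_mse
```

Because each learner is scored only on points it never saw, the OOB estimate serves as a built-in validation signal without a separate hold-out set, which is the property the proposed algorithms exploit.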
Similar resources
An Empirical Study of Bagging Predictors for Different Learning Algorithms
Bagging is a simple, yet effective design which combines multiple base learners to form an ensemble for prediction. Despite its popular usage in many real-world applications, existing research is mainly concerned with studying unstable learners as the key to ensure the performance gain of a bagging predictor, with many key factors remaining unclear. For example, it is not clear when a bagging p...
Combining Bagging and Additive Regression
Bagging and boosting are among the most popular resampling ensemble methods that generate and combine a diversity of regression models using the same learning algorithm as base-learner. Boosting algorithms are considered stronger than bagging on noise-free data. However, there are strong empirical indications that bagging is much more robust than boosting in noisy settings. For this reason, in t...
Local Negative Correlation with Resampling
This paper deals with a learning algorithm which combines two well known methods to generate ensemble diversity, negative correlation of errors and resampling. In this algorithm, a set of learners iteratively and synchronously improve their state considering information about the performance of a fixed number of other learners in the ensemble, to generate a sort of local negative correlation. Resamp...
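The negative-correlation idea above can be made concrete with the standard NCL penalty, in which each learner's squared error is augmented by a term that rewards deviation from the ensemble mean; a minimal sketch for one sample (the function name is illustrative):

```python
import numpy as np

def ncl_errors(preds, y, lam=0.5):
    """Per-learner errors under negative correlation learning (NCL).

    preds: shape (n_learners,), each learner's prediction for one sample.
    y: the target value. lam: penalty strength.
    The penalty (f_i - f_bar) * sum_{j != i}(f_j - f_bar) pushes learners
    away from the ensemble mean, decorrelating their errors.
    """
    f_bar = preds.mean()
    # sum over j != i of (f_j - f_bar) simplifies to -(f_i - f_bar)
    penalty = (preds - f_bar) * (-(preds - f_bar))
    return 0.5 * (preds - y) ** 2 + lam * penalty
```

With lam = 0 the term vanishes and each learner minimizes its own squared error independently; increasing lam trades individual accuracy for ensemble diversity.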
Combining Bagging, Boosting and Random Subspace Ensembles for Regression Problems
Bagging, boosting and random subspace methods are well known re-sampling ensemble methods that generate and combine a diversity of learners using the same learning algorithm for the base-regressor. In this work, we built an ensemble of bagging, boosting and random subspace ensembles, each with 8 sub-regressors, and an averaging methodology is used for the final prediction. We ...
A genetic approach for training diverse classifier ensembles
Classification is an active topic of Machine Learning. The most recent achievements in this domain suggest using ensembles of learners instead of a single classifier to improve classification accuracy. Comparisons between Bagging and Boosting show that classifier ensembles perform better when their members exhibit diversity, that is, commit different errors. This paper proposes a genetic algorit...